AI-Board (AIB)

The AI Board of the Department of Media and Communication provides advice and support on the use of artificial intelligence. This includes questions relating to research, teaching, and academic self-administration. Enquiries of any kind can be sent to the board at any time.

A brief introduction to the AI Board of the IfKW

Artificial intelligence is increasingly shaping research, teaching, and administration, and at the same time it raises new ethical, legal, and practical questions for academic work. The AI Board of the Department of Media and Communication supports students, lecturers, and researchers in using AI responsibly, transparently, and in compliance with regulations. The AIB's recommendations are based on fundamental academic principles such as autonomy, responsibility, transparency, replicability, fairness, and diversity. At the same time, the board takes into account the specific challenges of AI systems, such as hallucinations, bias, a lack of fact-checking, and data security issues.

The AI Board sees itself as an easily accessible point of contact for all issues relating to the use of AI: from didactic questions and the development of teaching and learning materials to the design of research processes and the clarification of examination rules. Enquiries of any kind can be sent to aib@ifkw.lmu.de at any time.

Guidelines on the use of AI for study, research, and teaching

Algorithmic information processing is part of everyday work, learning, and teaching. Lecturers use these new technical possibilities to plan their seminars and lectures. This also includes training students in the use of algorithm- and language-model-based digital agents, raising their awareness of the associated risks, and enabling them to use these tools competently. Students at the IfKW are therefore allowed to use AI in their studies, but only under clear rules that protect scientific integrity.

Guiding Principles

All use of AI in examination or research contexts must be disclosed (tool, prompt, purpose). Missing documentation is considered a violation of good academic practice.

Personal data, interview transcripts, unpublished manuscripts, or copyrighted texts must not be entered into AI systems unless explicit consent has been obtained and processing is legally permissible.

AI may support learning and research processes but does not replace individual academic work. Originality, critical thinking, and scholarly judgment remain the responsibility of the user.

Anyone using AI is fully responsible—legally and academically—for errors, bias, misinformation, and any consequences arising from AI output.

All AI-generated outputs must be critically reviewed for factual accuracy, potential bias or discriminatory content, scientific soundness, and social appropriateness.

Generative AI consumes significantly more energy than conventional digital tools. Domain-specific tools or locally hosted models are more resource-efficient and should be preferred where possible.

Members of the AI Board (AIB)

Prof. Dr. Mario Haim

Professor

Computational Communication Science • Political Communication • Computational Journalism

Julian Hohner, M.A.

Academic Staff

Computational Social Science • Radicalisation and Extremism Research • Visual Communication

Prof. Dr. Benjamin Krämer

Professor

Media Reception • Media History and Media Change • Political Communication • Music in the Media

Dr. Larissa Leonhard

Academic Staff

Digital Literacy • Formative Media Experiences • Media Reception and Impact

Anea Meinert

Academic Staff

Political Communication • Terrorism Coverage • Issue Ownership

Justin T. Schröder

Academic Staff

Science Communication • Trust in Science • Multimodal Communication • Gender in the Context of Science

Prof. Dr. Neil Thurman

Professor

AI and Automation in Journalism • Online Behaviour of Media Audiences • Journalists’ Routines and Attitudes

Contact

aib@ifkw.lmu.de

+49 89 2180-9449

+49 89 2180-9429